Results 1 - 18 of 18
1.
Heliyon ; 10(2): e24750, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38312568

ABSTRACT

Objective: Lipreading, which plays a major role in the communication of the hearing impaired, lacked a French standardised tool. Our aim was to create and validate an audio-visual (AV) version of the French Matrix Sentence Test (FrMST). Design: Video recordings were created by dubbing the existing audio files. Sample: Thirty-five young, normal-hearing participants were tested in the auditory and visual modalities alone (Ao, Vo) and in AV conditions, in quiet and in noise, with open- and closed-set response formats. Results: Lipreading ability (Vo) ranged from 1% to 77% word comprehension. The absolute AV benefit was 9.25 dB SPL in quiet and 4.6 dB SNR in noise. The response format did not influence the results in the AV noise condition, except during the training phase. Lipreading ability and AV benefit were significantly correlated. Conclusions: The French video material achieved AV benefits similar to those described in the literature for AV MSTs in other languages. For clinical purposes, we suggest targeting SRT80 to avoid ceiling effects, and performing two training lists in the AV condition in noise, followed by one AV list in noise, one Ao list in noise and one Vo list, in a randomised order, in open- or closed-set format.
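The "absolute AV benefit" figures above are simply differences between speech reception thresholds (SRTs) measured with and without the visual cue. A minimal sketch, where the function name and SRT values are hypothetical illustrations, not the study's data:

```python
# Sketch: absolute audio-visual (AV) benefit as an SRT difference.
# The SRT values below are hypothetical, chosen only to illustrate the units.

def av_benefit(srt_audio_only, srt_audio_visual):
    """Absolute AV benefit in dB: how much lower (better) the speech
    reception threshold is once lipreading cues are added."""
    return srt_audio_only - srt_audio_visual

# Hypothetical thresholds in noise (dB SNR): Ao = -6.0, AV = -10.6
print(av_benefit(-6.0, -10.6))  # 4.6 dB, same order of magnitude as above
```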

2.
Sci Rep ; 11(1): 956, 2021 01 13.
Article in English | MEDLINE | ID: mdl-33441758

ABSTRACT

Auditory roughness elicits aversion, and higher activation in cerebral areas involved in threat processing, but its link with defensive behavior is unknown. Defensive behaviors are triggered by intrusions into the space immediately surrounding the body, called peripersonal space (PPS). Integrating multisensory information in PPS is crucial to assure the protection of the body. Here, we assessed the behavioral effects of roughness on auditory-tactile integration, which reflects the monitoring of this multisensory region of space. Healthy human participants had to detect as fast as possible a tactile stimulation delivered on their hand while an irrelevant sound was approaching them from the rear hemifield. The sound was either a simple harmonic sound or a rough sound, processed through binaural rendering so that the virtual sound source was looming towards participants. The rough sound speeded tactile reaction times at a farther distance from the body than the non-rough sound. This indicates that PPS, as estimated here via auditory-tactile integration, is sensitive to auditory roughness. Auditory roughness modifies the behavioral relevance of simple auditory events in relation to the body. Even without emotional or social contextual information, auditory roughness constitutes an innate threat cue that elicits defensive responses.


Subjects
Behavior/physiology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation/methods, Adolescent, Adult, Female, Hand/physiology, Humans, Male, Personal Space, Physical Stimulation/methods, Reaction Time/physiology, Sound, Touch/physiology, Young Adult
3.
J Acoust Soc Am ; 147(1): EL55, 2020 01.
Article in English | MEDLINE | ID: mdl-32006981

ABSTRACT

A reproducible method to measure the intelligibility of communication systems is required to assess their efficiency. The current study seeks to develop a French version of the Modified Rhyme Test (MRT) [House, Williams, Hecker, and Kryter (1965). J. Acoust. Soc. Am. 37, 158-166], an intelligibility test composed of 50 six-word lists, originally developed for military applications and now widely used. An evaluation of the authors' French MRT was carried out, reproducing the original experimental conditions used by House and colleagues. Very similar results were found between the original MRT and the French MRT, validating the latter for further use and dissemination.

4.
Sci Rep ; 9(1): 8005, 2019 05 29.
Article in English | MEDLINE | ID: mdl-31142750

ABSTRACT

Human listeners are able to accurately recognize an impressive range of complex sounds, such as musical instruments or voices. The underlying mechanisms are still poorly understood. Here, we aimed to characterize the processing time needed to recognize a natural sound. To do so, by analogy with the "rapid visual sequential presentation paradigm", we embedded short target sounds within rapid sequences of distractor sounds. The core hypothesis is that any correct report of the target implies that sufficient processing for recognition was completed before the onset of the subsequent distractor sound. We conducted four behavioral experiments using short natural sounds (voices and instruments) as targets or distractors. We report the effects on performance, as measured by the fastest presentation rate allowing recognition, of sound duration, the number of sounds in a sequence, the relative pitch between targets and distractors, and the target position in the sequence. Results showed very rapid auditory recognition of natural sounds in all cases: targets could be recognized at rates of up to 30 sounds per second. In addition, the best performance was observed for voices in sequences of instruments. These results give new insights into the remarkable efficiency of timbre processing in humans, using an original behavioral paradigm to provide strong constraints on future neural models of sound recognition.


Subjects
Auditory Perception/physiology, Pitch Perception/physiology, Psychoacoustics, Voice/physiology, Acoustic Stimulation, Cerebellar Cortex/physiology, Female, Humans, Male, Music, Pitch Discrimination/physiology, Recognition (Psychology)/physiology, Sound, Young Adult
5.
Sci Rep ; 7(1): 11526, 2017 09 14.
Article in English | MEDLINE | ID: mdl-28912437

ABSTRACT

In human listeners, the temporal voice areas (TVAs) are regions of the superior temporal gyrus and sulcus that respond more to vocal sounds than a range of nonvocal control sounds, including scrambled voices, environmental noises, and animal cries. One interpretation of the TVA's selectivity is based on low-level acoustic cues: compared to control sounds, vocal sounds may have stronger harmonic content or greater spectrotemporal complexity. Here, we show that the right TVA remains selective to the human voice even when accounting for a variety of acoustical cues. Using fMRI, single vowel stimuli were contrasted with single notes of musical instruments with balanced harmonic-to-noise ratios and pitches. We also used "auditory chimeras", which preserved subsets of acoustical features of the vocal sounds. The right TVA was preferentially activated only for the natural human voice. In particular, the TVA did not respond more to artificial chimeras preserving the exact spectral profile of voices. Additional acoustic measures, including temporal modulations and spectral complexity, could not account for the increased activation. These observations rule out simple acoustical cues as a basis for voice selectivity in the TVAs.


Subjects
Auditory Perception, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
6.
J Assoc Res Otolaryngol ; 18(3): 457-464, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28101695

ABSTRACT

In a multi-talker situation, spatial separation between talkers reduces cognitive processing load: this is the "spatial release of cognitive load". The present study investigated the role played by the relative levels of the talkers in this spatial release of cognitive load. During the experiment, participants had to report the speech emitted by a target talker in the presence of a concurrent masker talker. The spatial separation (0° and 120° angular distance in azimuth) and the relative levels of the talkers (adverse, intermediate, and favorable target-to-masker ratios) were manipulated. The cognitive load was assessed with prefrontal functional near-infrared spectroscopy. Data from 14 young normal-hearing listeners revealed that the target-to-masker ratio had a direct impact on the spatial release of cognitive load. Spatial separation significantly reduced prefrontal activity only for the intermediate target-to-masker ratio and had no effect on prefrontal activity for the favorable and adverse target-to-masker ratios. Therefore, the relative levels of the talkers might be a key factor in determining the spatial release of cognitive load and, more specifically, the prefrontal activity induced by spatial cues in multi-talker situations.
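Setting a target-to-masker ratio (TMR) like the adverse/intermediate/favorable conditions above amounts to scaling the masker relative to the target's RMS level. A minimal sketch; the signals and the -6 dB figure are illustrative assumptions, not the study's stimuli:

```python
import math

def rms(samples):
    """Root-mean-square level of a signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def scale_masker(target, masker, tmr_db):
    """Scale the masker so that 20*log10(rms(target)/rms(masker)) == tmr_db."""
    desired_masker_rms = rms(target) / (10 ** (tmr_db / 20))
    gain = desired_masker_rms / rms(masker)
    return [s * gain for s in masker]

# Two toy tones standing in for target and masker speech (0.1 s at 16 kHz).
target = [math.sin(2 * math.pi * 440 * n / 16000) for n in range(1600)]
masker = [math.sin(2 * math.pi * 1000 * n / 16000 + 0.5) for n in range(1600)]
scaled = scale_masker(target, masker, tmr_db=-6.0)   # adverse condition
achieved = 20 * math.log10(rms(target) / rms(scaled))
print(round(achieved, 1))  # -6.0
```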


Subjects
Cognition/physiology, Noise/adverse effects, Prefrontal Cortex/physiology, Speech Perception/physiology, Female, Functional Neuroimaging, Healthy Volunteers, Humans, Male, Near-Infrared Spectroscopy, Speech Discrimination Tests, Young Adult
7.
Sci Rep ; 6: 26336, 2016 05 19.
Article in English | MEDLINE | ID: mdl-27193919

ABSTRACT

Individuals with autism spectrum disorders (ASD) are reported to allocate less spontaneous attention to voices. Here, we investigated how vocal sounds are processed in ASD adults, when those sounds are attended. Participants were asked to react as fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli. Response times (RTs) were measured. Results showed that, similar to neurotypical (NT) adults, ASD adults were faster to recognize voices compared to strings. Surprisingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice recognition process. To investigate the acoustic underpinnings of this effect, we created auditory chimeras that retained only the temporal or the spectral features of voices. For the NT group, no RT advantage was found for the chimeras compared to strings: both sets of features had to be present to observe an RT advantage. However, for the ASD group, shorter RTs were observed for both chimeras. These observations indicate that the previously observed attentional deficit to voices in ASD individuals could be due to a failure to combine acoustic features, even though such features may be well represented at a sensory level.


Subjects
Acoustic Stimulation/methods, Autism Spectrum Disorder/physiopathology, Reaction Time/physiology, Adult, Female, Humans, Male, Pattern Recognition (Physiological), Recognition (Psychology), Voice, Young Adult
8.
PLoS One ; 11(3): e0150313, 2016.
Article in English | MEDLINE | ID: mdl-26950589

ABSTRACT

Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
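The sketch stimuli above are built by keeping only a sparse budget of energy peaks (e.g., 10 features per second) from a time-frequency representation. A minimal pure-Python sketch of such a peak-picking step; the toy spectrogram, function name, and selection rule are illustrative assumptions, not the paper's implementation:

```python
def pick_peaks(spectrogram, frames_per_second, peaks_per_second):
    """Keep only the largest local maxima of a time-frequency representation.
    `spectrogram` is a list of frames, each a list of magnitudes per frequency
    bin. Returns a sparse copy with all non-selected cells zeroed.
    A toy stand-in for the peak-picking step described in the abstract."""
    n_frames = len(spectrogram)
    n_bins = len(spectrogram[0])
    # Collect local maxima along the frequency axis of each frame.
    peaks = []
    for t, frame in enumerate(spectrogram):
        for f in range(n_bins):
            left = frame[f - 1] if f > 0 else float("-inf")
            right = frame[f + 1] if f < n_bins - 1 else float("-inf")
            if frame[f] > left and frame[f] > right:
                peaks.append((frame[f], t, f))
    # Sparsity budget: e.g. 10 features per second of signal.
    budget = max(1, round(peaks_per_second * n_frames / frames_per_second))
    keep = set((t, f) for _, t, f in sorted(peaks, reverse=True)[:budget])
    return [[frame[f] if (t, f) in keep else 0.0 for f in range(n_bins)]
            for t, frame in enumerate(spectrogram)]

# Toy 4-frame, 5-bin "spectrogram" covering 1 s at 4 frames/s, 2 peaks/s.
spec = [[0, 3, 1, 0, 0],
        [0, 0, 5, 0, 0],
        [1, 0, 0, 4, 0],
        [0, 2, 0, 0, 1]]
sparse = pick_peaks(spec, frames_per_second=4, peaks_per_second=2)
```

With a budget of two features, only the two strongest local peaks (5 and 4) survive; everything else is zeroed, which is what makes the sketch "severely impoverished but still recognizable".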


Subjects
Auditory Perception/physiology, Sound, Acoustic Stimulation, Acoustics, Female, Humans, Male, Models (Biological), Young Adult
9.
J Acoust Soc Am ; 137(2): 911-22, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25698023

ABSTRACT

Listeners' ability to discriminate unfamiliar voices is often susceptible to the effects of manipulations of acoustic characteristics of the utterances. This vulnerability was quantified within a task in which participants determined if two utterances were spoken by the same or different speakers. Results of this task were analyzed in relation to a set of historical and novel parameters in order to hypothesize the role of those parameters in the decision process. Listener performance was first measured in a baseline task with unmodified stimuli, and then compared to responses with resynthesized stimuli under three conditions: (1) normalized mean-pitch; (2) normalized duration; and (3) normalized linear predictive coefficients (LPCs). The results of these experiments suggest that perceptual speaker discrimination is robust to acoustic changes, though mean-pitch and LPC modifications are more detrimental to a listener's ability to successfully identify same or different speaker pairings. However, this susceptibility was also found to be partially dependent on the specific speaker and utterances.


Subjects
Cues (Psychology), Discrimination (Psychology), Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Acoustics, Adolescent, Speech Audiometry, Female, Humans, Judgment, Male, Pattern Recognition (Physiological), Pitch Perception, Recognition (Psychology), Time Factors, Young Adult
10.
J Acoust Soc Am ; 135(3): 1380-91, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24606276

ABSTRACT

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.
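The gating manipulation above, cutting a few-millisecond snippet from a random position and smoothing its edges, can be sketched as follows. This is a generic illustration under stated assumptions (raised-cosine ramps, illustrative parameter names), not the study's exact stimulus code:

```python
import math, random

def gate_snippet(signal, sample_rate, duration_ms, ramp_ms=1.0, rng=None):
    """Cut a snippet of `duration_ms` from a random position in `signal` and
    apply raised-cosine on/off ramps so it starts and ends without clicks.
    Parameter names and defaults are illustrative, not from the paper."""
    rng = rng or random.Random()
    n = int(sample_rate * duration_ms / 1000)
    start = rng.randrange(0, len(signal) - n + 1)   # random starting time
    snippet = signal[start:start + n]               # list slice = copy
    r = max(1, int(sample_rate * ramp_ms / 1000))
    for i in range(r):
        g = 0.5 * (1 - math.cos(math.pi * i / r))   # raised-cosine ramp 0 -> 1
        snippet[i] *= g                             # fade in
        snippet[-1 - i] *= g                        # fade out
    return snippet

# One second of a 440 Hz tone at 16 kHz; extract an 8 ms gated snippet.
tone = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
snip = gate_snippet(tone, 16000, duration_ms=8, rng=random.Random(0))
```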


Subjects
Cues (Psychology), Discrimination (Psychology), Pitch Discrimination, Recognition (Psychology), Acoustic Stimulation, Adult, Analysis of Variance, Audiometry, Feedback (Psychological), Female, Humans, Male, Music, Sensory Gating, Signal Detection (Psychological), Singing, Sound Spectrography, Time Factors, Voice Quality, Young Adult
11.
Adv Exp Med Biol ; 787: 443-51, 2013.
Article in English | MEDLINE | ID: mdl-23716251

ABSTRACT

Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
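The d' values reported above come from standard signal detection theory for a yes/no task: d' = z(hit rate) - z(false-alarm rate). A minimal sketch, with hypothetical trial counts and a common correction for extreme rates (the correction choice is an assumption, not the paper's stated method):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index for a yes/no task: d' = z(hit rate) - z(FA rate).
    A log-linear correction (add 0.5 to each cell) avoids infinite z-scores
    when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical session: 40 voice trials, 40 distractor trials.
d = d_prime(hits=32, misses=8, false_alarms=6, correct_rejections=34)
```

Chance performance gives d' = 0; the "d's above 1" for durations of 8 ms and longer correspond to comfortably above-chance detection.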


Subjects
Acoustic Stimulation/methods, Pattern Recognition (Physiological)/physiology, Pitch Perception/physiology, Psychoacoustics, Speech Perception/physiology, Voice/physiology, Adult, Humans, Music, Perceptual Masking/physiology, Recognition (Psychology)/physiology, Speech Acoustics, Young Adult
12.
Cyberpsychol Behav Soc Netw ; 16(2): 145-52, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23425570

ABSTRACT

Traditionally, virtual reality (VR) exposure-based treatment concentrates primarily on the presentation of a high-fidelity visual experience. However, adequately combining the visual and the auditory experience provides a powerful tool to enhance sensory processing and modulate attention. We present the design and usability testing of an auditory-visual interactive environment for investigating VR exposure-based treatment for cynophobia. The specificity of our application involves 3D sound, allowing the presentation and spatial manipulations of a fearful stimulus in the auditory modality and in the visual modality. We conducted an evaluation test with 10 participants who fear dogs to assess the capacity of our auditory-visual virtual environment (VE) to generate fear reactions. The specific perceptual characteristics of the dog model that were implemented in the VE were highly arousing, suggesting that VR is a promising tool to treat cynophobia.


Subjects
Computer Simulation, Fear/psychology, Phobic Disorders/therapy, User-Computer Interface, Animals, Dogs, Humans, Phobic Disorders/psychology
13.
J Acoust Soc Am ; 131(5): 4124-33, 2012 May.
Article in English | MEDLINE | ID: mdl-22559384

ABSTRACT

Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice-versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources.


Subjects
Auditory Perception/physiology, Music, Reaction Time/physiology, Recognition (Psychology)/physiology, Acoustic Stimulation, Adult, Analysis of Variance, Cues (Psychology), Female, Humans, Male, Middle Aged, Perceptual Distortion/physiology
14.
Front Hum Neurosci ; 5: 158, 2011.
Article in English | MEDLINE | ID: mdl-22174701

ABSTRACT

In this review paper aimed at the non-specialist, we explore the use that neuroscientists and musicians have made of perceptual illusions based on ambiguity. The pivotal issue is auditory scene analysis (ASA), or what enables us to make sense of complex acoustic mixtures in order to follow, for instance, a single melody in the midst of an orchestra. In general, ASA uncovers the most likely physical causes that account for the waveform collected at the ears. However, the acoustical problem is ill-posed and must be solved from noisy sensory input. Recently, the neural mechanisms implicated in the transformation of ambiguous sensory information into coherent auditory scenes have been investigated using so-called bistability illusions (where an unchanging ambiguous stimulus evokes a succession of distinct percepts in the mind of the listener). After reviewing some of those studies, we turn to music, which arguably provides some of the most complex acoustic scenes that a human listener will ever encounter. Interestingly, musicians will not always aim at making each physical source intelligible, but rather express one or more melodic lines with a small or large number of instruments. By means of a few musical illustrations and by using a computational model inspired by neurophysiological principles, we suggest that this relies on a detailed (if perhaps implicit) knowledge of the rules of ASA and of its inherent ambiguity. We then put forward the opinion that some degree of perceptual ambiguity may participate in our appreciation of music.

15.
J Acoust Soc Am ; 127(3): EL105-10, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20329809

ABSTRACT

Simple reaction times (RTs) were used to measure differences in processing time between natural animal sounds and artificial sounds. When the artificial stimuli were sequences of short tone pulses, the animal sounds were detected faster than the artificial sounds. The animal sounds were then compared with acoustically modified versions (white noise modulated by the temporal envelope of the animal sounds). No differences in RTs were observed between the animal sounds and their modified counterparts. These results show that the fast detection observed for natural sounds, in the present task, could be explained by their acoustic properties.
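The "acoustically modified versions" above are white noise carrying the temporal envelope of the original sound. A minimal sketch of that construction; the envelope extraction (rectify then moving-average) and all parameters are illustrative assumptions, not the study's exact processing:

```python
import math, random

def envelope(signal, sample_rate, smooth_ms=10):
    """Temporal envelope: rectify, then smooth with a moving average."""
    w = max(1, int(sample_rate * smooth_ms / 1000))
    rect = [abs(s) for s in signal]
    out, acc = [], 0.0
    for i, v in enumerate(rect):
        acc += v
        if i >= w:
            acc -= rect[i - w]          # slide the averaging window
        out.append(acc / min(i + 1, w))
    return out

def envelope_noise(signal, sample_rate, rng=None):
    """White noise modulated by the temporal envelope of `signal`: a toy
    version of the control stimuli described in the abstract."""
    rng = rng or random.Random()
    env = envelope(signal, sample_rate)
    return [e * rng.uniform(-1, 1) for e in env]

# A decaying 300 Hz tone standing in for an animal call (0.5 s at 8 kHz).
sr = 8000
call = [math.sin(2 * math.pi * 300 * t / sr) * math.exp(-t / 2000)
        for t in range(4000)]
control = envelope_noise(call, sr, rng=random.Random(1))
```

The control keeps the call's coarse amplitude contour (loud onset, decaying tail) while destroying its fine spectral structure, which is exactly the comparison the experiment needs.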


Subjects
Auditory Perception/physiology, Noise, Reaction Time/physiology, Vocalization (Animal), Acoustic Stimulation/methods, Acoustics, Adult, Animals, Female, Humans, Male, Middle Aged, Recognition (Psychology)/physiology
16.
PLoS One ; 4(4): e5256, 2009.
Article in English | MEDLINE | ID: mdl-19384414

ABSTRACT

BACKGROUND: Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. METHODOLOGY/FINDINGS: We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. CONCLUSIONS/SIGNIFICANCE: These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.


Subjects
Animal Communication, Auditory Perception, Visual Perception, Adult, Animals, Female, Humans, Male
17.
Exp Brain Res ; 194(1): 91-102, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19093105

ABSTRACT

Recognizing a natural object requires one to pool information from various sensory modalities and to ignore information from competing objects. That the same semantic knowledge can be accessed through different modalities makes it possible to explore the retrieval of supramodal object concepts. Here, object-recognition processes were investigated by manipulating the relationships between sensory modalities, specifically the semantic content and the spatial alignment of auditory and visual information. Experiments were run in a realistic virtual environment. Participants were asked to react as fast as possible to a target object presented in the visual and/or the auditory modality and to inhibit a distractor object (go/no-go task). Spatial alignment had no effect on object-recognition time. The only spatial effect observed was a stimulus-response compatibility between the auditory stimulus and the hand position. Reaction times were significantly shorter for semantically congruent bimodal stimuli than would be predicted by independent processing of information about the auditory and visual targets. Interestingly, this bimodal facilitation effect was twice as large as that found in previous studies that also used information-rich stimuli. An interference effect (i.e., longer reaction times to semantically incongruent stimuli than to the corresponding unimodal stimulus) was observed only when the distractor was auditory. When the distractor was visual, the semantic incongruence did not interfere with object recognition. Our results show that immersive displays with large visual stimuli may provide large multimodal integration effects, and reveal a possible asymmetry in the attentional filtering of irrelevant auditory and visual information.


Subjects
Auditory Perception, Recognition (Psychology), Visual Perception, Acoustic Stimulation, Adult, Analysis of Variance, Female, Hand, Humans, Male, Photic Stimulation, Reaction Time, Space Perception
18.
J Exp Psychol Appl ; 14(3): 201-12, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18808274

ABSTRACT

It is well established that subjective judgments of the perceived urgency of alarm sounds can be affected by acoustic parameters. In this study, the authors investigated an objective measurement, the reaction time (RT), to test the effectiveness of temporal parameters of sounds in the context of warning sounds. Three experiments were performed using an RT paradigm, with two different concurrent visuomotor tracking tasks simulating driving conditions. Experiments 1 and 2 show that RT decreases as the interonset interval (IOI) decreases, where IOI is defined as the time elapsed from the onset of one sound pulse to the onset of the next. Experiment 3 shows that temporal irregularity between pulses can capture a listener's attention. These findings lead to concrete recommendations: IOI can be used to modulate warning sound urgency, and temporal irregularity can provoke an arousal effect in listeners. The authors also argue that the RT paradigm provides a useful tool for clarifying some of the factors involved in alarm processing.
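The two temporal parameters above, IOI and irregularity, can be sketched as a pulse-train generator. The function name, IOI values, and jitter range are illustrative assumptions, not the paper's stimuli:

```python
import random

def pulse_onsets(n_pulses, ioi_ms, jitter_ms=0.0, rng=None):
    """Onset times (ms) of a warning-sound pulse train. IOI is measured
    onset-to-onset, as defined above; `jitter_ms` adds the kind of temporal
    irregularity studied in Experiment 3."""
    rng = rng or random.Random()
    onsets, t = [], 0.0
    for _ in range(n_pulses):
        onsets.append(t)
        t += ioi_ms + rng.uniform(-jitter_ms, jitter_ms)
    return onsets

# Shorter IOI => faster pulse train => higher perceived urgency.
regular = pulse_onsets(5, ioi_ms=250)
# Same mean IOI, but irregular spacing, intended as an attention-capturing cue.
irregular = pulse_onsets(5, ioi_ms=250, jitter_ms=60, rng=random.Random(2))
```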


Subjects
Arousal, Attention, Auditory Perception, Automobile Driving/psychology, Psychomotor Performance, Reaction Time, Adult, Computer Simulation, Female, Humans, Judgment, Loudness Perception, Male, Motion Perception, Pattern Recognition (Visual), Time Perception